A letter to Donald Trump.

Dear President Trump,

Please help protect consumers in this information age.

The internet is an extremely complex web of interconnected data links, each owned by different organizations and corporations. Most Americans, however, have only one or two choices for whose links reach their homes and apartments.

By lobbying at all levels, these ISPs do everything they can to block or delay competitive broadband services, such as municipal broadband, from reaching Americans. If their customers have no other choice, they can charge higher prices there and lower prices elsewhere to drive out competition.

As most of these ISPs also offer television packages, they have a serious conflict of interest in providing fair access to competing services online. If customers can't get high-quality media online, they are more likely to purchase the television package from the ISP alongside the internet service. This could lead to an ISP degrading the connection quality to competing services or blocking them completely.

If you have only one or two data links to your home, owned by these ISPs, there is a very high potential for these companies to abuse their customers, many of whom will have no idea how much they are being taken advantage of.

So please, President Trump, if you do want to act for the people, take action to help protect our connections to this ever more important Internet from being controlled by those who “own” the wires running to our homes.

Thank you,
Scott

Is there room for an ehttp:// protocol?

Let's say you run a small website. It does not make you any money, so you really can't put much money into it. Perhaps you host it at Amazon Web Services for a hundred dollars a year out of your own pocket. Let's say also that you want to provide your visitors with the privacy of encrypted connections.

“Alright,” one might say, “HTTPS is the solution.”

So next you configure Apache with OpenSSL, read up on configuration and hardening, and get everything set up right. You self-sign a certificate, do a graceful restart of the web server, and visit your site. Next thing you know, your browser is telling you your website is doing something that could be harmful.
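
Generating such a self-signed certificate takes only one command. As a rough sketch (the hostname, file names, and validity period here are placeholders, not the exact values from my setup):

 # Hypothetical example: create a self-signed certificate and key,
 # then point Apache's SSLCertificateFile/SSLCertificateKeyFile at them.
 openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
   -subj "/CN=www.example.com" \
   -keyout server.key -out server.crt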

Let's just ignore for the moment that we knew that was going to happen, or why it happens, and think about what is actually different about the connection.

Before TLS was enabled, your web browser got the IP address from DNS, asked for a connection to that IP address, and trusted that whoever responded was who it wanted to talk to. Everything was transferred in clear text for any passive attacker to see. Their ISP can see it, the NSA can see it, China can probably see it, and their brother might be able to see it if he is good enough with computers.

You also don't know that it's actually the website you wanted, so an active attacker can change the content in either direction. They can inject frames or scripts into the website, or replace it with a redirect to a malicious site.

The good news on the other hand is that the browser is fine with it.

Checklist for Unencrypted HTTP:
Resistant to Passive Attacks: No
Resistant to Active Attacks: No
Browser Warnings: No
Cost: Nothing

Alright, well with HTTP we expect no security and we get none.

Let's see how HTTPS with a certificate issued by a valid Certificate Authority fares:

The DNS request is made unencrypted (except in exceptional configurations). This is not a big deal, since the Server Name Indication (SNI) is usually sent in clear text before any key exchange happens, and the server has to provide the certificate for the domain in clear text, so a passive attacker knows the hostname, such as gmail.com, zanthra.com, or insertsomethingveryembarassing.com (don't actually try to go there, as someone may buy that domain sometime).

On the other hand, after the handshake, the connection is now encrypted. This greatly limits the ability of a passive attacker to gather data on the things you like to view or research.

You also have protection from an active attacker, since the key exchange relies on the browser using the public key of the host certificate to open the connection. That public key is in turn secured by a chain of cryptographic signatures leading back to the private key of some company that your browser vendor, or OS vendor, chose to trust. This makes it very difficult for an active attacker to trick you into trusting a malicious site. If users did not get warnings for sites using self-signed certificates, and the browser had not received headers like HPKP on an earlier visit, there would be a risk.

So let's see how that stacks up.

HTTPS (with trusted Certificate):
Resistant to passive attacks: Yes
Resistant to active attacks: Yes
Browser Warnings: Green Lock Icon
Cost: $0 to $1,000 a year

Now let's look again at HTTPS with a self-signed certificate.

Against a passive attacker it still has the weakness of revealing the hostname, but the data is encrypted, so there is real protection against passive attacks.

Against an active attacker the story is a bit different, and not as simple as the prior examples. First, let's assume that you have never visited this site before and have no knowledge of the server's certificate. Let's also assume that the active attacker is targeting all visitors of this particular webpage (rather than targeting all webpages visited by a particular visitor). They probably have a pregenerated certificate for that website, which they can use to impersonate it. They can return whatever page they want and pretend to be that URL. Remember, though, that this same attacker could have done the same to the standard HTTP site as well.

Against an active attacker who is targeting all the webpages visited by a particular visitor, perhaps in order to decrypt communications that would otherwise be inaccessible to a passive attacker, the situation is only slightly better. If the attacker does not go through the effort of creating and signing a new certificate on the fly for each site, then a look at the certificate details would reveal a wildcard certificate.

This does not matter much for self-signed certificates, as no one would really try this sort of attack against websites with self-signed certificates: people don't visit those sites in enough volume, and when they do, there is not enough useful information to be gathered from their sessions. These attackers want your browsing habits, interests, hobbies, and the like. If you have reason to be visiting a site with a self-signed certificate that you would accept, you likely know something about verifying that certificate's identity. For example, the admin page of a remote website, where you can get the key signature through a side channel such as SSH or your web host's own TLS-secured page. I note all this to contrast it with my proposed ehttp certificate extension later.

So as the checklist:
Resistance to passive attackers: Yes
Resistance to active attackers: No
Browser Warnings: Yes
Cost: None

So to recap: moving from unencrypted HTTP to self-signed HTTPS, you have gained protection from passive attackers, and gained browser warnings. These browser warnings are not there for no reason. They serve to protect against active attacks on websites that have trusted certificates.

I propose that ehttp be made a new protocol prefix designating Encrypted Hypertext Transfer Protocol, designed to be resistant to eavesdropping, both active and passive. It uses self-signed or otherwise untrusted certificates, giving up resistance to impersonation in exchange. This makes it free to provide basic privacy to your users as a simple upgrade for sites that would otherwise be HTTP only.

The protocol itself is the same as HTTPS, but would not display the standard browser warnings for untrusted certificates. It would also not honor HPKP and HSTS headers, because an impersonator could supply those headers with a fake certificate in order to block legitimate visits. A browser could notify the user to switch to https if a site visited over ehttp presents a trusted certificate, but when visiting a site over the https protocol, an ehttp certificate should produce at least as much warning as a self-signed certificate.

Against passive attackers, this system is as resilient as HTTPS, since the encryption uses the same methods and key sizes. The hostname will be known, and the fact that it is a self signed certificate will be known, but this is still an advantage over standard HTTP.

An active attacker that wants to impersonate the server can again pregenerate a certificate in order to look like the server at that destination. Users should be warned, perhaps by clicking on a yellow warning icon where the green lock icon normally appears, that the website may not be who it claims to be and that personal information should not be provided over insecure connections.

Against a semi-active eavesdropper, one who wishes to watch all the websites a user visits by posing as the remote server, there is some protection. By restricting EHTTP to hostname certificates only, the eavesdropper is required to dynamically generate a new certificate whenever the user visits a new ehttp site. With standard self-signed certificates this would not be a significant deterrent, as signing a certificate is a very easy operation. To limit it, then, I propose an extension be added to EHTTP certificates called the EHTTP proof of work.

The EHTTP proof of work extension takes the valid domain name of the certificate and concatenates it with the public key fingerprint and a random nonce. This is hashed, and a new nonce generated, until the hash has a sufficient number of zeroes at the end of its binary string. I suggest a hash function be selected that has relatively good parity of speed between GPUs and CPUs (relative to other hash functions), and a minimum number of zeroes be selected that requires about five minutes of computation time on a fast quad-core desktop. This is similar to the proof of work used in other systems, most well known for its use in Bitcoin.
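
To make the idea concrete, here is a rough sketch of what the proof-of-work search could look like. Everything here is illustrative: the field values are placeholders, SHA-256 stands in for whatever hash function is chosen, and four trailing hex zeroes is a far lower difficulty than the five-minute target described above.

 # Hypothetical sketch of the proposed EHTTP proof-of-work search.
 domain="example.com"            # domain name from the certificate (placeholder)
 fingerprint="aa:bb:cc:dd"       # public key fingerprint (placeholder)
 nonce=0
 while :; do
     digest=$(printf '%s%s%s' "$domain" "$fingerprint" "$nonce" | sha256sum | cut -d' ' -f1)
     case "$digest" in
         *0000) break ;;         # 4 trailing hex zeroes = 16 trailing zero bits
     esac
     nonce=$((nonce + 1))
 done
 echo "nonce=$nonce digest=$digest"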

Because of the difficulty of quickly creating a new EHTTP certificate with a valid proof-of-work extension, semi-active eavesdroppers would be at a disadvantage. They would be at an even larger disadvantage if the initial handshake were indistinguishable from HTTPS: in order to remain transparent to both EHTTP and HTTPS servers, the eavesdropper would have to accept every TCP connection from the user, establish its own connection with the server, and relay all HTTPS sessions in software for no gain. Adding an HTTP header containing a verification of the public key fingerprint could raise the cost further by requiring layer 7 inspection to strip the header in the application layer. None of this guarantees that an active attacker is not eavesdropping, but it is far more secure than HTTP, and moderately more secure than HTTPS with self-signed certificates.

On top of this, browser extensions could be made that ask one or more third-party TLS-secured websites to also download the certificate for that web server (or request the certificate through a proxy like Tor). The semi-active attacker would not be able to impersonate the trusted TLS certificate, so if it presented a cached EHTTP certificate with proof of work to the client but was unable to intercept the third-party request, there would be a certificate mismatch.
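
The comparison itself needs nothing exotic. As a sketch, such a helper (or a curious user) could pull the certificate fingerprint from a second vantage point like this, with the hostname and port as placeholders, and compare it against what the browser was handed:

 # Hypothetical check: fetch the server's certificate from this vantage point
 # and print its fingerprint for comparison with the one the browser received.
 openssl s_client -connect www.example.com:443 -servername www.example.com \
   </dev/null 2>/dev/null | openssl x509 -noout -fingerprint -sha256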

So if we consider the checklist again.

Proposed EHTTP:
Resistance to passive attackers: Yes
Resistance to active attackers: Users should assume none; minimal with third-party certificate verification
Browser Warnings: No
Cost: None (besides some CPU time)

So why not just get a free trusted certificate from StartSSL, like I did for this blog? Because they are still a company that needs to make money to stay in business. I don't use long HSTS headers on this website for that reason: I don't know whether they will still be offering free certificates when it comes time to renew mine. It also still requires registration and confirmation through a third party. In general, though, most certificates are expensive for people who pay for their website out of their own pocket and get nothing in return for them.

Encrypted HTTP connections could provide a better experience for users of some public WiFi systems, as HTML injection is sometimes used to insert advertisements, and it can protect against some more malicious code injection attacks as well. It will not have any effect on corporate or school security systems that use locally trusted wildcard certificates to decrypt all HTTPS traffic for content filtering or preventing data leaks.

Because EHTTP is considered a different protocol from HTTPS, it will not reduce the security of HTTPS. Browsers should not provide any indication of security in the address bar, and if the user looks for more information, as they would for HTTPS, they should be informed that the identity of the server cannot be verified. I suggest that if a user visits a page through HTTPS and gets an EHTTP certificate, they be presented with warnings at least as harsh as the harshest given for standard certificate errors, and if they choose to proceed anyway, the browser should not downgrade to EHTTP, but should remain HTTPS with the normal error indicators in the address bar. EHTTP should not be classed apart from HTTP in regards to downgrade protection, or mixed content for HTTPS.

Site owners can use the same redirect or rewrite rules to upgrade users from HTTP to EHTTP, but EHTTP can be upgraded to HTTPS if a trusted certificate is received. Website owners should provide the proper inbound links using ehttp://, and browsers should naturally attempt ehttp before http if they would normally fall back to http if https did not respond.

If there is concern about rising certificate error rates from sites accepting HTTPS connections on the same port EHTTP uses, a separate port could be defined. This, however, would allow semi-active attackers to preselect EHTTP candidates for interception and let HTTPS traffic through, reducing their processing requirements.

What I have proposed is a Better Than Nothing Security protocol to enhance user privacy on the internet when compared to unencrypted HTTP. It requires a nontrivial effort for ISPs or others who may want to collect data on people’s browsing habits to circumvent. The potential attacks on EHTTP are no worse than that of unencrypted HTTP, and it does not in any way diminish the security of HTTPS using trusted certificates.

It may on the other hand cause some websites that would otherwise have gotten trusted certificates to delay or forgo a HTTPS transition, leaving those websites open to impersonation.

If Encrypted HTTP makes it sound too secure for people's liking, perhaps AHTTP, for Anonymously encrypted HTTP, based on the term for the Anonymous Ciphers available for TLS. While I don't propose that Anonymous Ciphers be allowed, as they provide no discernible protection against semi-active eavesdroppers and could be used to downgrade any protection EHTTP/AHTTP attempts with proof of work, the term conveys the same idea: the client still does not know who the server is.

While I don't have high hopes that this blog will really have a big impact, I do hope that this is something that will be looked at. Server hardware is at the point where the CPU power to do encryption is available even to small sites, but the effort and cost involved in getting trusted certificates is not. At the same time, the requirements for doing large-scale passive data collection are coming down too. With many recent discussions of online privacy, and with ISPs like AT&T starting to charge customers not to be tracked, I believe the time is right for this sort of standard.

What worries me about the lack of NAT in IPv6

I know that NATv6 does exist, and since NAT is transparent – in that no devices handling the traffic after the NAT can distinguish between NAT and non-NAT traffic – anyone who wants to implement it can do so. On the other hand, many of those pushing for IPv6 hold that there is no need for NAT in IPv6: all devices can have a globally routed IPv6 address, and a stateful firewall will solve the security problems.

The first thing to note is that NAT is a Stateful Firewall. It may be a stateful firewall with a limited set of available functions, but by default, it is a very secure Stateful Firewall. As the number of IPv4 addresses allocated to each consumer internet connection is usually just one, companies selling home networking equipment were forced to put a Stateful Firewall in every consumer router. The security impact of this is such that a large number of residential internet connections have very strong protection against unsolicited internet traffic.

With IPv6 giving global addresses to every device, I can't imagine that every low-quality home router will include a Stateful Firewall for the IPv6 stack (currently even some of the high-end routers don't support a Stateful Firewall on IPv6). Leaving it out saves CPU, memory, programming, testing, and documentation time. This would substantially limit the protection provided to consumer computers.

Address privacy methods, such as the temporary addresses used by Microsoft Windows, Apple OS X, and others, do not solve the problem, because attackers could harvest IP addresses from places such as peer-to-peer software distribution (such as the Blizzard patch downloader), log files, video chat, or network sniffing. At that point it is up to the OS's firewall in many cases to decide whether to handle the traffic or not (and users can often figure out how to turn off their software firewall, and are told to do so by some network and software debugging instructions).

As long as the router has a stateful firewall, the actual NAT is less important, but it could still be useful. The router will almost always have a large number of IPv6 addresses it can use for NAT, meaning it is free to assign a random IPv6 address to every single outgoing TCP or UDP stream (this would take no more effort than IPv4 NAT), although it could confuse some servers that track users by IP address. There would be no concern that a port requested by UPnP is already in use, because it could be opened on a new IPv6 address.

With or without NAT, I feel that having a strict Stateful Firewall on home routers is important for both IPv4 and IPv6. They should by default provide the same protection against unsolicited packets as an IPv4 router running NAT. I worry, however, that without the necessity of NAT, the companies that build the routers are less likely to add stateful firewalls to the IPv6 stack, and that's something that could hurt internet security.

NSA Data Collection and Decryption

So I finally read the documents that were leaked about the NSA's wide-scale network traffic collection, databases, and attack systems, and I must say I am very impressed but not surprised. There are many who are upset about the NSA's actions, and there are a few things that I feel the NSA should not be doing (inserting backdoors into commercial software or hardware), but for the most part I feel that the NSA is doing what any government should be doing.

There are several mentions around the internet that the NSA has broken SSL, SSH, IPsec, and PPTP. It should be little surprise that PPTP security was broken; it has been known for a long time that PPTP usually has glaring security flaws. IPsec has many different implementations, some of which are subject to certain vulnerabilities. There is little said about SSH in the documents, although it is interesting to see the NSA has no problems using SSH in their own systems, leading me to believe that properly configured SSH systems are still secure.

SSL is more interesting. There is a lot of mention of HTTPS and SSL, and I believe it's their common use across the internet that leads the NSA to focus so much on them. From the documents, it seems that to decrypt data over SSL, the NSA needs the private key of the host certificate. They also state that if the system uses the Diffie-Hellman key exchange instead of the RSA key exchange, a method providing what's called Forward Secrecy, where a new key unknown to any eavesdropper is created on the spot and forgotten by both parties after the communication is finished, they still cannot decrypt it easily.

The key part is that this “vulnerability” has been known since the algorithms were put down on paper. The protection of the private key is of utmost importance. The vulnerability for data to be decrypted with the use of the standard key exchange has also been well known, and there has been much discussion of the need for “Forward Secrecy”. Diffie-Hellman is computationally expensive for both parties, and while a desktop computer making three or four Diffie-Hellman exchanges won’t have any trouble, a website responding to hundreds of thousands of requests has a much larger burden.
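
As an aside, you can see which key exchange a given server negotiates with the openssl command line. A rough sketch (the hostname is a placeholder; an ECDHE or DHE cipher in the output indicates a forward-secret exchange):

 # Hypothetical check: print the negotiated protocol and cipher suite.
 openssl s_client -connect www.example.com:443 -servername www.example.com \
   </dev/null 2>/dev/null | grep -E 'Protocol|Cipher'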

None of the documents mention any particular methods of retrieving the private key, but there are several. The one that requires no active involvement is to factor the modulus stored in the host certificate. This modulus is usually a composite of two large prime numbers (although the difficulty of verifying that they are prime before using them leads some to be composites of more than two). The computing power needed to factor a 1024-bit modulus may be something the NSA's supercomputers would be capable of in a reasonable amount of time, especially if they have skilled mathematicians and programmers working on new methods and optimizations.

Getting the key through other means may still be easier. The NSA has an entire section devoted to attacking computers and networks which I do believe is wrong, and they certainly have individuals searching and documenting new vulnerabilities the NSA can exploit. Using these to retrieve private key files could be much easier than factoring. Buying the keys, begging for them, social engineering, or infiltration are certainly also possible.

Naturally, the NSA keeps a database of keys, as they are very valuable. With standard key exchanges, they allow both forward and retroactive decryption of all exchanges made using that key. It's hard to fathom any intelligence organization not keeping a database of compromised private keys. The wide-scale collection and decryption of internet traffic when keys are available and forward secrecy is not used is quite impressive.

There is also mention in many places that PGP is still secure from the NSA. This is true, in much the same way as SSL is. The problem is one of key distribution. Two parties communicating using a service secured by SSL are subject to the service’s protection of their private key. They have always on servers, available IP Addresses, and the NSA can usually find their physical locations. Compromise the server’s private key, and you can potentially decrypt all the messages to and from that server. PGP on the other hand is client to client, with the keys being stored by the end users. The availability of these end user devices to attacks of any sort is much lower, and a compromised PGP key only allows decryption of messages to that party. On the other hand, unlike Diffie-Hellman with SSL, PGP has no method of providing forward secrecy, as it’s inherently a one way communication. If a key is compromised, any past or future messages using that key can be decrypted.

The NSA is not the only government out there doing this. China is possibly the worst. They are known for their armies of hackers looking for any insecure system, and they have been known to hijack internet routes in order to capture data. Their capabilities in regards to decryption, key databases, data collection, username/password databases, router config databases, VPN databases, and so many other things are likely on the same or a larger scale than the NSA's.

There are those who are upset with the NSA over all this, but I find it hard to get mad at them for doing their job. Certainly these leaks will make the NSA's job harder, likely through expanded use of Forward Secrecy, but that's something the internet was already making baby steps toward. The NSA's own document mentioned a few sites, including Google, that were already using forward secrecy. This is not about hiding data from the NSA, because it's not about hiding data from any individual entity; it is about hiding data from all third-party entities.

Kemono no Souja Erin

I wanted to write about an anime I finished watching the other day called Kemono no Souja Erin. It has become one of my favorite anime, mostly due to the relationship between Erin and Lilan. If you are concerned about spoilers, don't read further.

SPOILER WARNING

I have always really liked non-human creatures in stories, particularly so when they don’t act or think like humans. The Oujyuu had enough time devoted to them in this series to really create an intriguing species, and the relationship Erin has with Lilan brings up a lot of philosophical questions as well.

Erin's motives for going to the Oujyuu center revolve around her truly wanting to make the Oujyuu's lives better. She refuses to use the mute whistle, an ultrasonic whistle capable of paralyzing the Oujyuu, which the Oujyuu grow to fear. Erin's ability to communicate with Lilan is based on their trust of each other, but that comes at a very high price. Erin is almost killed by Lilan once when she gets startled over a hairbrush, and her steadfast abhorrence of the mute whistle later leads to another character nearly being killed and to Erin losing three fingers, which drives her to use the whistle herself to stop Lilan, as much as it pains her to do so.

Interestingly, despite Erin's strong belief that using the whistle against an Oujyuu would destroy the trust between them, it seemed that Lilan did not truly remain upset with Erin for long. Lilan was willing to trust Erin and her friend Ial enough to conceal Ial from their political enemy. Why did Lilan still trust Erin despite the use of the mute whistle against her, and several threats to use it afterward? I think perhaps Lilan understood how much it hurt Erin to see the whistle used, and that Erin still cared deeply for Lilan.

Perhaps even more interesting is that at the end, when Lilan was mad with bloodlust for the Touda, Erin was able to break her out of it with the threat of using the mute whistle. Even Je, the queen of a group of people who had an almost symbiotic relationship with the Oujyuu long before the time in which the anime takes place, had been unable to stop her own friend Luke from that same bloodlust, despite likely having decades of friendship between them. Did Lilan truly want to kill the Touda, or was the bloodlust driving her mad in a way from which Erin's threat freed her?

In the end, Lilan didn’t care what Erin had done or threatened to do, and risked her own life to save Erin.

Perhaps the most interesting question is whether Je's people knew about or used mute whistles. It's clear in the story that Lilan nearly killed Erin twice unintentionally, by not understanding the consequences of what she was doing. As large as the Oujyuu are, it's hard to imagine that it's only the rules about how to raise the Oujyuu that would lead to such problems.

I think it likely that Je's people did use the mute whistles when necessary, understanding that they were unpleasant at best for the Oujyuu, and the Oujyuu understood that necessity. When Je came to form the new Kingdom, however, she wrote the rules so that the whistle would be used instead of forming real relationships with the Oujyuu.

Erin perhaps hated the whistle because it was used for that purpose. It was that which she was referring to when she said that an Oujyuu would not become friends with a human who uses a whistle against them.

Erin is not seen carrying a mute whistle in the closing scene with her child and Owl, but I certainly was not expecting them to show that. Her openly carrying the whistle around her neck in the last episodes was a symbol of the lack of freedom for her, Lilan, Eku, and Owl; not carrying it was an indication of their freedom. Erin is not likely, however, to forget the hard-learned lessons about the danger the Oujyuu present to humans. If Lilan, Eku, and Owl freely decided to stay with Erin, I could certainly believe she may still carry one.

Fun with virtual machines

I have been a big fan of virtualization for a long time.  I like experimenting with computers, operating systems, servers and the like, and virtualization makes it really easy.

Some time ago, the early Killer NICs basically included an embedded Linux system that could provide various network services.  I have been using Windows 8 and Client Hyper-V since Windows 8 was released, and it can create a similar configuration through virtual machines.

In Hyper-V there are three different types of virtual network switches. The first is External, which bridges the hardware network card with any virtual machines on the switch; it also provides the option to create a virtual link on the Host so that it can still connect to the external network.

The second is Internal, which provides both the virtual machines and the host with links to the switch, but does not connect to the external network.

The last is Private, which connects only the virtual machines.

All of these switches work more or less like physical switches, and the idea here is to create a Virtual Machine to bridge the Host, on an Internal Switch, to the rest of the network on an External Switch.  As the traffic passes through this virtual machine, it can be analyzed, processed, or otherwise managed.

For my experiment I used FreeBSD, in particular the FreeBSD 10 stable amd64 snapshot for Hyper-V found in the appropriate section here: ftp://ftp.freebsd.org/pub/FreeBSD/snapshots/VM-IMAGES/ . I chose FreeBSD because of its reputation for security and high-performance networking, along with the ipfw firewall built into the kernel.

I created a virtual External switch attached to my hardware network adapter, without connecting the host to it.  I then created an Internal switch with a host connection.  I attached both of these switches to the FreeBSD virtual machine.  In the Hyper-V advanced settings for the virtual machine's network adapters, “Enable MAC Address Spoofing” must be enabled.

In FreeBSD’s rc.conf file, I used the following to bridge the network adapters, and enable ipfw.

 cloned_interfaces="bridge0"
 ifconfig_bridge0="addm hn0 addm hn1 SYNCDHCP"
 ifconfig_hn0="up"
 ifconfig_hn1="up"
 firewall_enable="YES"
 firewall_script="/etc/ipfw.rules"

You do not have to use the SYNCDHCP option; leaving it out means the FreeBSD machine has no Layer 3 network connection of its own, but it can still be configured through the Hyper-V manager.  You can install and use whatever network management, intrusion detection, packet filtering, packet capturing, or packet modification tools you like.  Other software such as bind or squid could be run on the virtual machine as well to provide things like ad blocking.  A firewall configured this way is completely transparent to the Windows operating system.
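
For completeness, the /etc/ipfw.rules script referenced above could be as small as the following sketch. The interface roles are assumptions on my part (hn0 facing the External switch, hn1 facing the host), the rule numbers are arbitrary, and filtering bridged packets may also require sysctls such as net.link.bridge.ipfw to be set.

 #!/bin/sh
 # Hypothetical /etc/ipfw.rules: a minimal stateful policy for the bridge.
 fwcmd="/sbin/ipfw"
 ${fwcmd} -q flush
 ${fwcmd} add 100 check-state
 # Allow traffic received from the host side and remember each flow.
 ${fwcmd} add 200 allow ip from any to any recv hn1 keep-state
 # Drop anything arriving from the external side that matches no existing state.
 ${fwcmd} add 300 deny ip from any to any recv hn0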

Do keep in mind that this will not protect you from anyone who has administrative privileges on the Host Operating System, as with those privileges they can simply configure the External network switch in Hyper-V to include the Host operating system, and bypass the firewall.