What do you use PGP/SMIME for?

Nicholas Leippe nick at leippe.com
Wed May 7 15:27:44 MDT 2008

I'm only trying to say that once something is 'not secure', it can't get any 
more 'not secure'. If something has some amount of security added, then there 
are degrees of security at that point--but it's no longer essentially plain 
text, shouted out for all the world to hear.

You can argue that almost nothing is secure, since there usually exists an 
amount of work 'X' that can be done to decipher the information. As our 
technology advances, that work becomes easier to perform (botnets, rainbow 
tables, ready-made scripts, etc.).
Thus even when 'X' > 0, in some cases I still consider it to be 'not secure'.

For myself, I define 'not secure' as the point at which 'X' is readily 
performed by a minimally motivated entity with minimal resources to do so. 
Meaning, in short, that there is a cliff below which everything is for 
practical purposes insecure--which is why I don't deem it necessary to give 
it some sort of rating. I would include:

- substitution ciphers
- anything for which there exists a software tool to decipher, which
  includes such things as DVD CSS, other forms of DRM, WPA, and obscure
  things like the Enigma cipher machine
- any cipher combined with an easily obtainable key
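To make the 'cliff' concrete, here's a sketch (Python, purely illustrative) of 
just how little work 'X' a simple substitution cipher demands: a Caesar shift 
falls to a brute-force loop over its 25 possible keys.

```python
def shift(text, k):
    """Shift each letter by k positions (a trivial substitution cipher)."""
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            out.append(chr((ord(ch) - base + k) % 26 + base))
        else:
            out.append(ch)
    return ''.join(out)

ciphertext = shift("attack at dawn", 3)  # "dwwdfn dw gdzq"

# The attacker's entire job: try all 25 keys and eyeball the output.
candidates = [shift(ciphertext, -k) for k in range(1, 26)]
assert "attack at dawn" in candidates
```

A few lines of code recover the plaintext with no key at all--which is exactly 
why this lands below the cliff rather than earning a graded rating.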

Even if there is some security, if the means by which a common Joe can bypass 
it are readily available, then for all intents and purposes it is not secure. 
Anyone can get past CSS on a DVD. Sure, they can't just stick it in their 
computer and have it--they have to go find the software to do it for them 
first, but that software is readily available. Thus I count that as no longer 
secure. It's not just 'low security', it's insecure, because the cost to 
decipher is within reach of just about anyone, with next to no effort.

Something encrypted by a one-time pad is insecure if anyone can get the pad. 
At that point it's just data that's been tweaked, but anyone can untweak it. 

It's just as if it had some additional packaging that needs to be torn off, 
but nothing more.
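As a sketch of that point (Python, illustrative only): a one-time pad is just 
an XOR against the pad, so anyone holding the pad can 'untweak' the 
ciphertext with the very same one-line operation.

```python
import os

def xor(data: bytes, pad: bytes) -> bytes:
    """XOR each byte of data against the pad (pad must be at least as long)."""
    return bytes(a ^ b for a, b in zip(data, pad))

plaintext = b"meet me at noon"
pad = os.urandom(len(plaintext))   # must stay secret and never be reused
ciphertext = xor(plaintext, pad)

# Without the pad, the ciphertext reveals nothing. With the pad, decryption
# is the identical operation -- the 'packaging' tears right off.
assert xor(ciphertext, pad) == plaintext
```

The entire security of the scheme lives in the secrecy of the pad; once the 
pad is obtainable, 'X' drops to zero.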

There are at least four major parts to the 'security equation':
- the risk of damage, or the value of the privacy of the plaintext (how much
  could the damage from a breach cost)
- the opportunity for the potential eavesdropper to obtain the ciphertext
- the motivation for the potential eavesdropper to obtain the plaintext (the
  value of the plaintext to the eavesdropper)
- the strength of the security measures, 'X', the inverse of the ease with
  which an eavesdropper could obtain the plaintext from the ciphertext
- a fifth could be the cost of implementing the security measures--actual
  cost, or productivity cost--how much do the security measures get in the
  way of people getting their work done, or just plain annoyance factor

When the risk is at 0, the question of security becomes moot and the rest 
doesn't matter.
When opportunity is at 0, I would consider it to be completely secure, 
regardless of any other factors.
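Those special cases, plus the 'cliff' threshold on 'X', could be sketched as a 
toy decision function (Python; the function name, threshold value, and scales 
are all hypothetical illustrations, not a real metric):

```python
def practically_secure(risk, opportunity, motivation, strength):
    """Toy model of the 'security equation' factors (all names hypothetical).

    risk, opportunity, motivation: non-negative estimates of each factor.
    strength: the work factor 'X' an eavesdropper must expend.
    """
    if risk == 0:
        # Nothing to lose: the question of security is moot.
        return True
    if opportunity == 0:
        # Eavesdropper can never obtain the ciphertext: secure regardless.
        return True
    # Below some threshold, treat 'X' as flatly insecure (the 'cliff')
    # rather than assigning it a graded rating.
    CLIFF = 1.0  # arbitrary illustrative threshold
    return strength > CLIFF
```

The point of the sketch is the shape of the logic, not the numbers: two of 
the factors short-circuit the whole question, and 'X' only matters above the 
cliff.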

When opportunity and the risk are both > 0, then people tend to choose the 
security measures according to the risk. Sometimes when they estimate the 
motivation of the eavesdropper to be very low and/or the opportunity to be 
very low they tend to ignore their risk and discount the need for security 
(think passwords on post-it notes at your office). A high annoyance factor 
also often leads to this.

I can concede that you could label data with low values of 'X' with a variable 
amount of 'insecure', since 'X' itself varies, but I think that below a 
certain threshold it's pointless to do so, and better to just admit and warn 
people that 'X' is so low that it offers no practical protection.

I guess I'm just being pedantic about an odd view of the semantics, but I do 
so because I see more value in making people think something is 'not secure' 
than letting them think that 'it has some security'--a glass half empty vs. 
half full point of view. I don't want people to be lulled into a false sense 
of security and then get stung--I'd rather they choose their actions 
knowingly and not be surprised if/when something goes wrong. The cost to the 
victim may be the same, but the cost to whoever led the victim to that 
attitude can vary greatly depending on which way the victim was 
thinking--it's about expectations.

A good example is the recent changes in online banking security measures, 
some of which actually *reduce* security while boldly proclaiming to the user 
that they *increase* it. This is only a step backwards IMO, and a very bad 
one at that. I imagine the cost to the banks from phishing victims might 
start to rise now that some people think they are more secure when they are 
actually less secure. I can see some lawsuits on the horizon since 
expectations have been raised while security has decreased.

