In computational complexity, a fairly popular claim seen in most textbooks equates the class P with practically feasible algorithms, and everything not known to be in P (but usually within NP; "not known to be in P" is an awkward phrase we're stuck with until P != NP is finally proven, even though we all believe it to be true already) with practically infeasible algorithms. While it's often noted that this correspondence can fail, with some P algorithms being impractical and some algorithms not known to be in P being practical, such disclaimers usually dismiss these cases as contrived and not worth further consideration.
Here's a really simple example of a genuinely useful problem in P (linear, even) that's not practical at all.
Utterly impractical O(n) problem
Imagine the problem of forging a digital signature for a message under some reasonable digital signature scheme. The inputs are a public key of constant length and a message of length n. The algorithm is as follows (a concrete sketch in Python follows the list):
- Extract the private key from the public key, by brute force or some other algorithm (constant time, since the key length doesn't depend on n)
- Sign the message (linear time in n, which is required just to read the message, and no more than linear time is necessary)
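To make the two steps concrete, here's a minimal, self-contained Python sketch. The toy discrete-log scheme, the parameters P, G and KEY_BITS, and the HMAC-based "signature" are illustrative assumptions standing in for a real scheme:

```python
import hashlib
import hmac

# Toy stand-in for a real signature scheme (illustrative assumptions only):
# the public key is pow(G, private_key, P) for a fixed-size private exponent,
# and "signing" is an HMAC keyed with the private key.
P, G = 2**127 - 1, 3    # fixed toy group parameters
KEY_BITS = 64           # fixed key size; real schemes use far larger keys

def brute_force_private_key(public_key: int) -> int:
    # Constant with respect to n: the search space 2**KEY_BITS never grows
    # with the message. Astronomically slow in practice, which is the point.
    for k in range(2**KEY_BITS):
        if pow(G, k, P) == public_key:
            return k
    raise ValueError("no matching private key")

def forge_signature(public_key: int, message: bytes) -> bytes:
    private_key = brute_force_private_key(public_key)  # huge constant time
    key = private_key.to_bytes(KEY_BITS // 8, "big")
    return hmac.new(key, message, hashlib.sha256).digest()  # O(n) in message length
```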
Yet unless someone breaks the public key system and provides a vastly faster method of extracting the private key from the public key, we won't be able to solve it even for n=1.
This problem is of great practical importance: the entire online banking system would fail, and billions of dollars of fraud would happen, if someone could solve it in practice. So it's not fair to call utterly infeasible P problems contrived.
13 comments:
I don't understand this.
Dude, you're cheating. Finding the private key by brute force probably depends linearly on the public key (exponentially on the length of its binary representation). By your reasoning, the complexity of any algorithm that we need to run on a fixed set of inputs is irrelevant; the program will run in constant time. Although technically true, this is useless. You want the complexity to be a function that approximates the number of instructions the CPU will have to execute, so one has an idea of how long it will take to complete.
Anonymous: It's not cheating. The algorithm's running time is a ridiculously high constant plus n times a tiny number, which is O(n).
You can take any public-key signature algorithm and some specific key size. The system doesn't even need to be well defined for other key sizes (most are, but not all).
There's nothing theoretically wrong with this.
The entire point of the post was to show that O(n) != practical. Complaints that this isn't practical miss the point.
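Spelled out, this is just the definition of big-O. Writing C for the (astronomical) brute-force cost and c for the per-byte signing cost:

T(n) = C + c*n <= (C + c)*n for every n >= 1,

so T(n) = O(n) with constant C + c, no matter how large C is.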
It is cheating.
Again,
"Extract private key from the public key by brute force, or some other algorithm" != "(constant time)"
It is exponential in n.
"You can take any algorithm for public signature, and some specific key size. The system doesn't even need to be well defined with other key sized (most are, but not all)."
No, because if you fix the key size, the complexity is not a function of n; it is a function of a constant. So you have a large constant plus a smaller one.
"Yet, unless someone breaks the public key system and provides vastly faster method of extracting private key based on the public key, we won't be able to solve it even for n=1."
For low-strength encryption (small values of n), brute force would be relatively fast.
"There's nothing theoretically wrong with this."
Everything is theoretically wrong with this.
Good one!
That's not how order of growth works. You can't just fix n and say that the problem is then O(1). By that argument you can make any problem O(1) by setting an upper limit for n (e.g., a trillion). The reason this problem is not tractable is that it's exponential in n, and they chose a very large n.
You get this as O(n) because you're counting "extract the private key by brute force" as the primitive operation.
So I can turn your O(n) algorithm into a constant-time algorithm: simply establish, as my counted operation, doing n private-key extractions.
Go back to Knuth volume 1. Do not pass go.
Seneca: It is constant time, as it does not depend on n. Brute-forcing the key takes the same amount of time no matter what the input length is.
Your problem has another input: whatever encryption algorithm you are using has a key of size m, and brute-force cracking it is probably a search of the whole 2^m space. You are drawing bogus conclusions because you ignore m, and it's what dominates your problem.
Your problem is O(n+2^m).
By your "of constant length" logic, this problem is O(1) because all inputs are constant. Just wait until you have the message, and then it's "of constant length", so O(1).
What have you been eating lately?
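(For reference, the whole disagreement comes down to whether the key size m is an input or a fixed constant:

T(n, m) = O(2^m + n) if m is a parameter; T(n) = O(2^m + n) = O(n) if m is fixed.

Both readings are internally consistent; the post fixes m, while this comment treats it as an input.)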
What you're basically saying is that the problem:
1. Wait a billion years
2. Solve an O(n) problem
is an O(n) problem.
Or simply that:
O(1) + O(n) == O(n),
even though the O(1) part has a really big constant factor.
But you're saying this in a really confusing way, implying that your so-called "constant time problem" (the problem of extracting the private key from the public key) is in fact a constant-time problem. Technically it is, with regard to the size of some other, seemingly related but completely different problem. But this doesn't change the fact that it is still an exponential-time, or NP-hard, problem with regard to its own size (the size of the public key).
The amount of information content in your blog post is negative. (The cat picture notwithstanding.)
Then you don't understand big-O analysis. The big-O for your algorithm is:
O(sign(msg) + factor(public key)) = max(O(n), O(exp(m)))
where
m = key length
n = message length
O(exp(m)) dominates O(n), so this algorithm is O(exp(m)), not O(n).
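(The step from the sum to the max is the standard identity for nonnegative functions:

max(f, g) <= f + g <= 2*max(f, g), so O(f + g) = O(max(f, g)).)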
I am familiar with the terms but have never been good at their analysis. This seems like a good discussion of the big-O analysis of this algorithm. Initially I agreed with taw's argument, but Sandro Magi also seems correct, so can you help me reach a conclusion?